Search for: All records

Creators/Authors contains: "Vial, Daniel"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Given a reversible Markov chain $P_n$ on $n$ states, and another chain $\tilde{P}_n$ obtained by perturbing each row of $P_n$ by at most $\alpha_n$ in total variation, we study the total variation distance between the two stationary distributions, $\|\pi_n - \tilde{\pi}_n\|$. We show that for chains with cutoff, $\|\pi_n - \tilde{\pi}_n\|$ converges to 0, is asymptotically at most $1 - e^{-c}$ (with a sequence of perturbations for which convergence to this bound occurs), and converges to 1, respectively, if the product of $\alpha_n$ and the mixing time of $P_n$ converges to 0, $c \in (0, \infty)$, and $\infty$, respectively. This echoes recent results for specific random walks that exhibit cutoff, suggesting that cutoff is the key property underlying such results. Moreover, we show $\|\pi_n - \tilde{\pi}_n\|$ is maximized by restart perturbations, for which $\tilde{P}_n$ "restarts" at a random state with probability $\alpha_n$ at each step. Finally, we show that pre-cutoff is (almost) equivalent to a notion of "sensitivity to restart perturbations," suggesting that chains with sharper convergence to stationarity are inherently less robust.
    Free, publicly-accessible full text available May 1, 2026
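    A minimal numerical sketch of this setup (the lazy random walk on a cycle, the restart weight alpha, and the restart distribution mu below are illustrative assumptions, not the paper's construction) builds a restart-perturbed chain and compares the two stationary distributions directly:

    ```python
    import numpy as np

    def stationary(P, iters=20_000):
        """Stationary distribution of a row-stochastic matrix via power iteration."""
        pi = np.full(P.shape[0], 1.0 / P.shape[0])
        for _ in range(iters):
            pi = pi @ P
        return pi / pi.sum()

    n, alpha = 100, 0.01
    # Lazy simple random walk on the n-cycle: reversible and aperiodic.
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 0.5
        P[i, (i - 1) % n] = 0.25
        P[i, (i + 1) % n] = 0.25

    # Restart perturbation: with probability alpha, jump to a draw from mu.
    # Each row of P_tilde then differs from the matching row of P by at most
    # alpha in total variation.
    mu = np.random.default_rng(0).dirichlet(np.ones(n))
    P_tilde = (1 - alpha) * P + alpha * np.outer(np.ones(n), mu)

    pi, pi_tilde = stationary(P), stationary(P_tilde)
    print("TV distance:", 0.5 * np.abs(pi - pi_tilde).sum())
    ```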
  2. We consider a multi-agent multi-armed bandit setting in which n honest agents collaborate over a network to minimize regret but m malicious agents can disrupt learning arbitrarily. Assuming the network is the complete graph, existing algorithms incur O((m + K/n) log(T)/Δ) regret in this setting, where K is the number of arms and Δ is the arm gap. For m ≪ K, this improves over the single-agent baseline regret of O(K log(T)/Δ). In this work, we show the situation is murkier beyond the case of a complete graph. In particular, we prove that if the state-of-the-art algorithm is used on the undirected line graph, honest agents can suffer (nearly) linear regret until time is doubly exponential in K and n. In light of this negative result, we propose a new algorithm for which the i-th agent has regret O((d_mal(i) + K/n) log(T)/Δ) on any connected and undirected graph, where d_mal(i) is the number of i's neighbors who are malicious. Thus, we generalize existing regret bounds beyond the complete graph (where d_mal(i) = m), and show the effect of malicious agents is entirely local (in the sense that only the d_mal(i) malicious agents directly connected to i affect its long-term regret).
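    The locality claim can be sanity-checked by evaluating the stated bound on a small graph; in the sketch below, the constant C stands in for the unspecified constant hidden by the O(·) notation, and the example graph and parameter values are hypothetical:

    ```python
    from math import log

    def regret_bound(adj, malicious, K, n_honest, T, gap, C=1.0):
        """Evaluate the per-agent expression C * (d_mal(i) + K/n) * log(T) / gap.

        This only evaluates the bound from the abstract; it does not run
        the bandit algorithm itself.
        """
        bounds = {}
        for i, nbrs in adj.items():
            if i in malicious:
                continue
            d_mal = sum(1 for j in nbrs if j in malicious)  # malicious neighbors of i
            bounds[i] = C * (d_mal + K / n_honest) * log(T) / gap
        return bounds

    # Undirected line graph 0-1-2-3-4 with agent 2 malicious.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    print(regret_bound(adj, malicious={2}, K=10, n_honest=4, T=100_000, gap=0.1))
    # Only agents 1 and 3 (the malicious agent's neighbors) pay the d_mal term.
    ```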
  3. Altafini, Claudio; Como, Giacomo; Hendrickx, Julien M.; Olshevsky, Alexander; Tahbaz-Salehi, Alireza (Ed.)
    We study a social learning model in which agents iteratively update their beliefs about the true state of the world using private signals and the beliefs of other agents in a non-Bayesian manner. Some agents are stubborn, meaning they attempt to convince others of an erroneous true state (modeling fake news). We show that while agents learn the true state on short timescales, they "forget" it and believe the erroneous state to be true on longer timescales. Using these results, we devise strategies for seeding stubborn agents so as to disrupt learning, which outperform intuitive heuristics and give novel insights regarding vulnerabilities in social learning. 
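    A simplified DeGroot-style caricature of this dynamic (the paper's exact non-Bayesian update and signal model are not given in this listing; the averaging weights, signal quality, and stubborn set below are assumptions) reproduces the learn-then-forget pattern:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, stubborn = 50, [0, 1, 2]              # 3 stubborn agents push the wrong state

    # Honest agents start from informative private signals (80% initially
    # believe the true state); stubborn agents hold belief 0 forever.
    beliefs = (rng.random(n) < 0.8).astype(float)
    beliefs[stubborn] = 0.0

    W = rng.random((n, n))                   # positive averaging weights
    W /= W.sum(axis=1, keepdims=True)        # make each row a distribution

    for t in range(1, 101):
        beliefs = W @ beliefs                # DeGroot-style averaging step
        beliefs[stubborn] = 0.0              # stubborn agents never update
        if t in (1, 5, 20, 100):
            # Honest average is high early (agents "learn"), then drifts to 0
            # (agents "forget") as the stubborn agents' influence compounds.
            print(t, round(float(beliefs[3:].mean()), 3))
    ```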